The Problem: The issue arose while provisioning a 2TB SAN LUN on a customer’s bare-metal Oracle Linux 8 server. The new LUN appeared as /dev/sdb, but running pvcreate /dev/sdb made LVM respond with the error “Device /dev/sdb excluded by a filter.” Nothing in the filter rules had changed, yet something on this specific drive was causing the LVM filter to block access to it.
The Constraints: The server was running production workloads under a strict change window, so rebooting it or resetting the entire LVM configuration was off the table. A default filter in lvm.conf, presumably put in place to prevent unintended operations, was blocking what I was trying to accomplish.
The Solution: Run pvcreate with verbose logging to determine exactly why the filter was blocking the device, remove the stale LVM signatures with wipefs, and temporarily loosen the filter for the one-time initialization. After that, pvcreate succeeded, and I immediately tightened the filter beyond its original form.
Quick Summary
- What the “device excluded by a filter” error actually means.
- How to read the filter logic in lvm.conf and identify which rule is blocking access.
- Using the straightforward wipefs utility to erase old LVM, MD, and partition-table metadata.
- How to work safely on multipath setups and avoid high-availability split-brain scenarios.
Tested environment: Ubuntu 22.04 LTS (kernel 5.15) with LVM2 version 2.03.11. The procedures should also apply to RHEL and its clones as well as Debian-family distributions.
What Didn’t Work For Me
Before I understood why the filter was blocking the device, I tried the obvious: I deleted the partition table with fdisk /dev/sdb and used dd to zero out the first 512 bytes of the disk. Neither worked; pvcreate -ff, forcing creation of the physical volume, returned the same filter error. Those strategies failed because LVM signatures live outside sector 0 — the LVM2 label is normally written in the second 512-byte sector, with volume group metadata in the first few kilobytes after it. I eventually ran wipefs to list the signatures still present on the disk and figured it out. Save yourself the time: use wipefs instead of dd.
Root Cause Analysis: Why LVM Filters Exclude Devices
pvcreate cannot open device – The filter blockade
When you run pvcreate, LVM passes the block device through the filter defined in /etc/lvm/lvm.conf. The filter is a set of regular expressions that determine whether LVM may use that device. If no accept rule matches the device and a reject rule does, LVM reports “excluded by a filter” and ignores that disk.
The confusing thing about the filter is that even a brand-new disk can be blocked by it. For example, the system default may accept only multipath device-mapper nodes and reject raw devices under /dev/sd*. Or the disk may carry old metadata from a previous LVM setup, so LVM treats it as a duplicate.
lvm.conf filter configuration decoded
The filter syntax is simple once you have seen it once: a list of patterns that accept with an a prefix or reject with an r prefix. They are processed in the order listed, so the first match wins.
The factory default filter on a clean install of Ubuntu may look like this:
filter = [ "a|.*|" ]
This tells LVM to accept all devices. If you are on a multipath-enabled server, your filter may well look much stricter, such as the following:
filter = [ "a|/dev/mapper/.*|", "r|.*|" ]
This example accepts devices under the device-mapper directory /dev/mapper/ while rejecting plain /dev/sd* devices. As the lvm.conf man page notes, a filter can contain multiple patterns, but the first pattern that matches a device is what determines whether it is accepted.
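To make the first-match-wins rule concrete, here is a minimal plain-shell sketch of the evaluation order. The patterns are illustrative, not taken from any real lvm.conf, and LVM of course does this matching internally — this is only a model of the logic:

```shell
# Minimal sketch of LVM's first-match-wins filter evaluation, in plain shell.
# Rules use the same "a|pattern|" / "r|pattern|" shape as lvm.conf.
match_filter() {
    dev="$1"; shift
    for rule in "$@"; do
        action="${rule%%|*}"                          # leading "a" or "r"
        pattern="${rule#?|}"; pattern="${pattern%|}"  # strip the a|...| wrapper
        if printf '%s\n' "$dev" | grep -Eq "$pattern"; then
            echo "$dev: $action"                      # first matching rule decides
            return
        fi
    done
    echo "$dev: a"                                    # no rule matched: default accept
}

match_filter /dev/mapper/mpatha 'a|/dev/mapper/.*|' 'r|.*|'   # accepted (a)
match_filter /dev/sdb           'a|/dev/mapper/.*|' 'r|.*|'   # rejected (r)
```

Note that putting 'r|.*|' first would reject every device before the accept rule is ever consulted — ordering is everything.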
Linux partition table remnants causing rejection
This is where the filter can sneak up on you. Even when your filter does accept raw SCSI disks, leftover LVM2 labels or remnants of an old MBR/GPT can change how the device is presented. If that stale metadata causes the device to be claimed by multipath, a filter that only accepts /dev/mapper/.* will reject the underlying /dev/sdb path, producing the exclusion error.
In short, old partition tables and stale LVM labels can indirectly trigger filter rejections by changing how the kernel presents the device.
Decoding the “initialize physical volume error” during pvcreate
If you see the message ‘Device /dev/sdb excluded by a filter’, LVM is telling you the device failed the filter check before any metadata was even examined, so it could not be considered for a new physical volume. That is different from an “already contains a physical volume” warning. If you are seeing the filter line, concentrate on the filter rules and the device path rather than on wiping the PV header.
You can confirm this by running pvcreate with a higher verbosity setting.
$ sudo pvcreate -vvv /dev/sdb 2>&1 | grep "filter"
# Output will show each filter rule tested. A line like
# /dev/sdb: Skipping (filter) <-- this is the smoking gun
Fix LVM Device Excluded by a Filter
Step 1: Identify the exact error with verbose pvcreate
Getting the whole picture is important — run pvcreate in debug mode and let the logs show you exactly which rule rejected your device.
$ sudo pvcreate -vvv /dev/sdb 2>&1 | tee pvcreate-debug.log
When you run pvcreate in verbose mode, search the output for lines containing the text ‘filter’. You will see a series of regex rules being tested, followed by a line such as ‘/dev/sdb: Skipping (filter)’. From that you can tell whether you are missing an accept rule or have a rule that expressly rejects your device.
Step 2: Wipefs clear signatures to remove old metadata
To remove the old signatures that are confusing your system, use the wipefs command (part of util-linux).
$ sudo wipefs -a /dev/sdb
wipefs: /dev/sdb: 8 bytes were erased at offset 0x00000218 (LVM2_member): 4c 56 4d 32 20 30 30 31 <-- removed old LVM label
wipefs: /dev/sdb: 8 bytes were erased at offset 0x00000000 (dos): 55 aa
wipefs: /dev/sdb: 2 bytes were erased at offset 0x000001fe (dos): 55 aa <-- killed MBR signature
wipefs: /dev/sdb: calling ioctl to re-read partition table: Success
As shown above, wipefs removed an old LVM member label and a DOS partition-table signature from the device — exactly the kind of stale metadata that trips up filters, particularly on multipath deployments. Once the disk has been wiped, confirm that it is really empty:
$ sudo wipefs -n /dev/sdb # no output means no signatures remain
Step 3: Modify lvm.conf filter to accept the device
We must ensure that the LVM filter is not preventing access to the device. Backup the lvm.conf file prior to modifying it:
$ sudo cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.bak
Edit the filter line in the lvm.conf file to allow for all devices to be configured by LVM during this operation only. Set the filter line to look like this:
filter = [ "a|.*|" ] <-- accept all block devices
Important: This line must replace the existing filter entry in the devices section of lvm.conf. Edit it in place rather than appending a second filter line — duplicate entries make it ambiguous which setting is in effect.
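If you script this change, a sed in-place substitution can swap the active filter line. This is a sketch run against a scratch copy in /tmp — on the real system you would target /etc/lvm/lvm.conf with sudo, after taking the backup first:

```shell
# Sketch: replace the active filter line in a working copy of lvm.conf.
# Assumes a single uncommented "filter = ..." line in the devices section.
work=/tmp/lvm.conf.work
printf 'devices {\n    filter = [ "a|/dev/mapper/.*|", "r|.*|" ]\n}\n' > "$work"
# '#' as the sed delimiter avoids clashing with the '|' in the filter patterns
sed -i 's#^\([[:space:]]*\)filter = .*#\1filter = [ "a|.*|" ]#' "$work"
grep 'filter = ' "$work"
```

The capture group preserves the original indentation, so the devices section stays formatted as before.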
After saving the changes to the config file, check to ensure that LVM is aware of the changes:
$ sudo lvmconfig --type full | grep filter
filter="a|.*|"
There is no need to restart the service; LVM will automatically read the new configuration file and get any new settings on the next command.
Step 4: Run pvcreate again and verify
After wiping the disk and allowing LVM access through the changed configuration to the device, pvcreate should now complete successfully.
$ sudo pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created.
Verify this:
$ sudo pvs /dev/sdb
PV VG Fmt Attr PSize PFree
/dev/sdb lvm2 --- 2.00t 2.00t
Lastly, return the filter in lvm.conf to a safe configuration: either copy the backup file back into place, or write a permanent, targeted filter that accepts the device backing the new volume group along with any multipath paths.
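The restore itself is a copy plus a verification. Sketched here against scratch copies in /tmp so it is safe to dry-run; on the real host the paths are /etc/lvm/lvm.conf and the .bak taken earlier, run under sudo:

```shell
# Sketch: put the saved filter back and confirm it is active again.
conf=/tmp/lvm.conf.demo
bak=/tmp/lvm.conf.demo.bak
printf 'devices {\n    filter = [ "a|/dev/mapper/.*|", "r|.*|" ]\n}\n' > "$bak"  # the original, strict filter
printf 'devices {\n    filter = [ "a|.*|" ]\n}\n' > "$conf"                      # the temporary open filter
cp "$bak" "$conf"         # restore; on a real host: sudo cp /etc/lvm/lvm.conf.bak /etc/lvm/lvm.conf
grep 'filter = ' "$conf"  # then verify with: sudo lvmconfig --type full | grep filter
```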
Edge Cases & Advanced Debugging: When the Fix Isn’t Enough
Multipath device LVM exclusion and correct filter syntax
If your servers run DM-Multipath, the filter must explicitly include the mapper path and exclude the raw SCSI paths. A typical filter looks like this:
filter = [ "a|/dev/mapper/mpath.*|", "r|.*|" ]
If you reverse the order of the patterns or omit the reject-all guard, LVM may grab a raw /dev/sd* path that multipath has already claimed, risking exactly the conflicts this filter exists to prevent. See the Red Hat documentation on LVM filters for clear examples, and run multipath -ll first to see which maps are already set up.
vgcreate physical extent fail after successful pvcreate
It is possible for pvcreate to succeed and a subsequent vgcreate to fail with a physical-extent-size error. This indicates a mismatch between the extent size you specified (or the default) and what the new volume group can support; it has nothing to do with a “PE size of the PV” — PVs themselves do not carry a fixed extent size, which is a volume group property. When you create a new VG, set the extent size with the -s flag, for example:
$ sudo vgcreate -s 4M vg_data /dev/sdb
Note that the extent size is fixed when the volume group is created. If you picked the wrong value, recreate the VG with the correct -s before placing any data on it — there is no need to wipe the PV itself.
Undocumented workaround: Forcing global filter reload without a reboot
You may have heard that a reboot is required for new filter settings to take effect. That is misleading: LVM reads lvm.conf on every invocation, so running pvcreate again immediately after updating the file applies the new filter at once. The only exception was on systems that cached device state in the lvmetad daemon (e.g. RHEL 7), where you had to restart it with systemctl restart lvm2-lvmetad. Modern LVM versions have removed lvmetad entirely, so the filter is always live. You can also re-scan your devices to refresh any stale device nodes:
$ sudo udevadm trigger --action=change
$ sudo pvscan --cache
Prevention & Best Practices: Avoid Filter Pitfalls
Standardize LVM filter rules for mixed‑storage environments
Create a filter that matches your expected device naming convention. For instance, if you have local SSDs and SAN multipath volumes, the following filter keeps it clean:
filter = [ "a|/dev/sd[ab]|", "a|/dev/mapper/mpath.*|", "r|.*|" ]
This allows the first two SCSI disks and all multipath volumes while rejecting everything else. Adapt the regex to your hardware and document it in your configuration-management baseline. This one line has saved me from a lot of late-night calls.
Routine wipefs to clear Linux partition table remnants before commissioning
When you’re ready to hand a disk over to LVM, take a moment to run wipefs -a /dev/sdX first. It erases DOS, GPT, LVM2, MD RAID, and filesystem signatures in one pass. I now include this in my server provisioning playbooks; it takes the guesswork out of “Why is LVM saying this shiny new disk can’t be used?” and keeps old leftovers from interfering with LVM scanning.
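As a sketch of what that playbook step can look like — demonstrated on a throwaway scratch image rather than a real disk, since wipefs works on plain files too. In production, DEV would point at the actual /dev/sdX:

```shell
# Pre-commissioning check: wipe only if signatures are present, then verify.
# DEV is a scratch image here; in a playbook it would be the real disk.
DEV=$(mktemp /tmp/commission-demo.XXXXXX)
dd if=/dev/zero of="$DEV" bs=1M count=4 status=none
# Plant the 0x55AA MBR magic at offset 510 to simulate a stale partition table
printf '\125\252' | dd of="$DEV" bs=1 seek=510 conv=notrunc status=none
if [ -n "$(wipefs -n "$DEV")" ]; then      # -n: report signatures without erasing
    echo "stale signatures found on $DEV, wiping"
    wipefs -a "$DEV" >/dev/null
fi
[ -z "$(wipefs -n "$DEV")" ] && echo "$DEV is clean, safe to pvcreate"
```

Rehearsing on an image like this is also a safe way to get comfortable with wipefs before pointing it at a production LUN.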
Frequently Asked Questions
Why does pvcreate still fail after fixing the LVM filter?
If you cleared the LVM filter and pvcreate still fails, check whether the disk is bound to multipath. Run multipath -ll /dev/sdb; if it returns a map, either flush it (multipath -f <map>) or adjust the filter to allow multipath devices and run pvcreate against the mapper node instead.
How do I identify which device is excluded by the filter in a multipath setup?
Use verbose output from pvcreate and look for Skipping (filter) lines. In a multipath environment the raw /dev/sd* path is typically the one rejected while the /dev/mapper/mpath* node passes. On newer LVM versions that use the devices file, lvmdevices lists which devices LVM may use, and lvmdevices --adddev /dev/sdb adds a device to that list explicitly.
Can I use a device rejected by the LVM filter for a non‑LVM purpose without data loss?
Yes, you can. The filter only controls whether LVM will manage the device. You can still create an ordinary filesystem on it and mount it, or use it as a raw block device. As long as you leave the LVM filter alone and never run pvcreate against it, the data remains safe.
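To illustrate, here is a sketch on a scratch image (safe to dry-run; on a real disk the target would be the /dev/sdX node and the commands would need sudo):

```shell
# A filter-rejected disk is still a normal block device: you can put a
# plain filesystem on it without involving LVM at all.
img=$(mktemp /tmp/nonlvm-demo.XXXXXX)
dd if=/dev/zero of="$img" bs=1M count=16 status=none
mkfs.ext4 -q -F "$img"              # -F is needed only because this is a file
blkid -p -o value -s TYPE "$img"    # prints: ext4
```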